Street Egohood: An Alternative Perspective of Measuring Neighborhood and Spatial Patterns of Crime
Objectives: The current study proposes an approach that accounts for the importance of streets while also capturing the overlapping spatial nature of the social and physical environments measured by the egohood approach. Our approach uses overlapping clusters of streets based on street network distance, which we term street egohoods. Methods: We used the street segment as the base unit and employed two strategies for clustering street segments: (1) first-order queen contiguity; and (2) street network distance, accounting for physical barriers. We used these approaches to measure ecological factors and estimate crime rates in the Los Angeles metropolitan area. Results: We found that whereas certain socio-demographic, land use, and business employee measures show stronger relationships with crime when measured at the smaller street-based unit, a number of them actually exhibited stronger relationships when measured using our larger street egohoods. We compared the results for our three sizes of street egohoods to street segments and to the two sizes of block egohoods proposed by Hipp and Boessen (Criminology 51(2):287–327, 2013), and found that the two egohood strategies are essentially indistinguishable at the quarter-mile level, but this similarity weakens at the half-mile level. The street egohood models also fit violent and property crime better than the block egohood models. Conclusions: A primary contribution of the current study is to develop and propose a new perspective for measuring neighborhoods based on urban streets. We empirically demonstrated that whereas certain socio-demographic measures show the strongest relationship with crime when measured at the micro geographic unit of street segments, a number of them actually exhibited the strongest relationship when measured using our larger street egohoods. We hope future research can use egohoods to expand understanding of neighborhoods and crime.
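The network-distance clustering described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it assumes a networkx graph whose nodes are street segments and whose edge weights are street lengths, and builds an egohood as the set of segments within a given network distance of a focal segment.

```python
import networkx as nx

def street_egohood(g, segment, radius):
    """All street segments within `radius` network distance of the
    focal segment -- its overlapping 'egohood'.

    g       : graph with street segments as nodes and edges between
              adjacent segments, weighted by street length in metres
    segment : the focal street segment (the ego)
    radius  : maximum network distance (a quarter mile is roughly 402 m)
    """
    ego = nx.ego_graph(g, segment, radius=radius, distance="weight")
    return set(ego.nodes)

# Toy street network: a chain of four segments, 200 m apart.
g = nx.Graph()
g.add_edge("s1", "s2", weight=200)
g.add_edge("s2", "s3", weight=200)
g.add_edge("s3", "s4", weight=200)

print(sorted(street_egohood(g, "s1", 400)))  # ['s1', 's2', 's3']
```

Because every segment gets its own egohood, neighboring egohoods overlap by construction, which is the defining feature of the egohood approach.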
Micro-Scale, Meso-Scale, Macro-Scale, and Temporal Scale: Comparing the Relative Importance for Robbery Risk in New York City
We compare the relative importance of four dimensions for explaining the micro location of robberies: (1) the micro spatial scale of street segments; (2) the meso spatial scale surrounding the street segment; (3) the temporal pattern across hours of the day; and (4) the macro scale of the surrounding 2.5 miles. This study uses crime, business, and land use data from New York City, aggregated to street segments and hours of the day. Although the measures capturing the micro scale of the street segment explained the largest amount of unique variance, the measures capturing the temporal scale across hours of the day (and weekdays) explained the next largest amount. The measures of characteristics at the 2.5-mile macro scale explained the next largest amount of unique variance and, combined with the measures at the meso scale, explained nearly as much of the variance as the street segment measures.
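The "unique variance" comparison used above can be illustrated with a toy decomposition: the unique contribution of one block of measures is the drop in R² when that block is removed from the full regression. This is a generic sketch on synthetic data, not the study's actual model of New York City robberies; the block names and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(x, y):
    """R^2 of an OLS fit of y on x (with an intercept)."""
    x1 = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(x1, y, rcond=None)
    resid = y - x1 @ beta
    return 1.0 - resid.var() / y.var()

def unique_variance(blocks, y, name):
    """Unique variance explained by one block of measures: the drop
    in R^2 when that block is removed from the full model."""
    full = r_squared(np.column_stack(list(blocks.values())), y)
    rest = np.column_stack([v for k, v in blocks.items() if k != name])
    return full - r_squared(rest, y)

# Synthetic data: the 'micro' block is given the largest effect.
n = 500
micro, meso, temporal = (rng.normal(size=(n, 1)) for _ in range(3))
y = (3.0 * micro + 1.0 * meso + 0.5 * temporal).ravel() + rng.normal(size=n)

blocks = {"micro": micro, "meso": meso, "temporal": temporal}
for name in blocks:
    print(f"{name:8s} unique R^2: {unique_variance(blocks, y, name):.3f}")
```

By construction the micro block dominates here; in the study, the interesting finding is how much the temporal and macro blocks add on top of the street-segment measures.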
Studying Double Charm Decays of B_{u,d} and B_{s} Mesons in the MSSM with R-parity Violation
Motivated by the possibly large direct CP asymmetry of \bar{B}^0_d \to D^+ D^- decay measured by the Belle collaboration, we investigate double charm B_{u,d} and B_s decays in the minimal supersymmetric standard model with R-parity violation. We derive bounds on the relevant R-parity violating couplings from current experimental data, which show quite consistent measurements among the relevant collaborations. Using the constrained parameter spaces, we explore R-parity violating effects on other observables in these decays which have not been measured, or have not been well measured, yet. We find that the R-parity violating effects on the mixing-induced CP asymmetries of \bar{B}^0_d \to D^{(*)+} D^{(*)-} and \bar{B}^0_s \to D^{(*)+}_s D^{(*)-}_s decays could be very large; nevertheless, the R-parity violating effects on the direct CP asymmetries cannot be large enough to explain the large direct CP violation of \bar{B}^0_d \to D^{+} D^{-} reported by Belle. Our results could be used to probe R-parity violating effects and will correlate with searches for direct R-parity violating signals in future experiments.
Comment: 28 pages and 6 figures; matches published version
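For readers less familiar with the observables named above, the standard time-dependent parametrization for a neutral B decaying to a CP eigenstate f separates the mixing-induced (S_f) and direct (C_f) components. This is generic background; sign conventions for C_f (sometimes written A_f = -C_f) vary, and the paper's own conventions may differ.

```latex
\begin{align}
  A_{CP}(t)
  &= \frac{\Gamma(\bar B^0(t)\to f)-\Gamma(B^0(t)\to f)}
          {\Gamma(\bar B^0(t)\to f)+\Gamma(B^0(t)\to f)}
   = S_f\,\sin(\Delta m\, t) - C_f\,\cos(\Delta m\, t), \\
  S_f &= \frac{2\,\operatorname{Im}\lambda_f}{1+|\lambda_f|^2}, \qquad
  C_f  = \frac{1-|\lambda_f|^2}{1+|\lambda_f|^2}, \qquad
  \lambda_f = \frac{q}{p}\,\frac{\bar A_f}{A_f}.
\end{align}
```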
The Macroeconomic Consequences of Scottish Fiscal Autonomy: Inverted Haavelmo Effects in a General Equilibrium Analysis of the Tartan Tax
In 1997 the Scottish people voted both for the creation of a legislative Parliament and to endow the Parliament with tax-varying powers. The establishment of the Scottish Parliament in 1999 heralded the most radical innovation in the regional fiscal system in modern U.K. history. This development has been the subject of considerable controversy, however, especially in respect of the decision to afford the Parliament the power to alter the basic rate of income tax by up to 3p in either direction. The fact that Scotland, at least according to official data, receives a substantial net fiscal transfer from the rest of the UK, and has traditionally had higher public expenditure per capita than England, leads most commentators to believe that the power to change the standard rate will, in practice, be restricted to the power to increase it (Blow et al., 1996; McGregor et al., 1997). Accordingly, while the legislation allows the power to be used to generate a balanced-budget contraction in expenditure, we focus here on the impact of a balanced-budget fiscal expansion. While Labour, the SNP and the Liberal Democrats in Scotland all supported the introduction of a Parliament with tax-raising powers, the Conservatives labelled this scheme the "tartan tax" and claimed that its use would be detrimental to Scotland, leading to a reduction in Scottish employment and to net out-migration. This political controversy, together with the national Labour Party's desire to shed its reputation as a party of high taxation, in part accounted for the Scottish Labour Party's commitment not to exercise the tax-varying power during the lifetime of the first Scottish Parliament, even though others have meanwhile been vigorously arguing the case for full fiscal autonomy. In this paper we focus primarily on the consequences for the Scottish economy if the Parliament chooses to exercise the degree of fiscal autonomy that it already possesses.
However, the factors that govern the likely macroeconomic impact of a balanced-budget change also prove critical to the analysis of any region-specific tax or expenditure change, whether generated as a consequence of, for example, rigorous adherence to the Barnett formula (which, at least in principle, governs the allocation of government expenditure to the devolved authorities in the UK; et al 2003, 2007) or of movement towards greater fiscal autonomy. Accordingly, we also identify the implications of our analysis for the wider debate on regional fiscal issues in general and greater fiscal autonomy in particular.
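The "inverted Haavelmo effects" of the title allude to the textbook balanced-budget multiplier: in the simplest Keynesian income-expenditure model, an equal increase in spending and taxation still raises income one-for-one. The derivation below is that textbook result, not the paper's model; the "inverted" label signals that this sign can be reversed once regional supply-side responses such as wage bargaining and migration are modelled.

```latex
% Simplest Keynesian income-expenditure model:
%   Y = c(Y - T) + I + G,  0 < c < 1  (c = marginal propensity to consume)
\begin{align}
  \mathrm{d}Y &= \frac{1}{1-c}\,\mathrm{d}G - \frac{c}{1-c}\,\mathrm{d}T, \\
  \mathrm{d}T = \mathrm{d}G \;\Longrightarrow\;
  \mathrm{d}Y &= \frac{1-c}{1-c}\,\mathrm{d}G = \mathrm{d}G > 0.
\end{align}
```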
Calibration of Distributionally Robust Empirical Optimization Models
We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and develop a theory for data-driven calibration of the non-negative "robustness parameter" that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a "little bit of robustness" (i.e., a small, positive robustness parameter) is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods like the bootstrap. Our examples show that robust solutions resulting from "open loop" calibration methods (e.g., selecting a confidence level regardless of the data and objective function) can be very conservative out-of-sample, while those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust.
Comment: 51 pages
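As an illustration of bootstrap calibration of a robustness parameter, the sketch below specializes to a KL-divergence penalty, for which the worst-case expected reward has the well-known closed form −δ log E[exp(−reward/δ)]. This is an assumption-laden toy (synthetic data, a scalar decision chosen by grid search), not the paper's φ-divergence formulation or its calibration procedure.

```python
import numpy as np

def reward(a, x):
    """Toy smooth concave reward f(a, x) = -(x - a)^2."""
    return -(x - a) ** 2

def robust_value(rewards, delta):
    """Worst-case expected reward under a KL-divergence penalty.

    For the KL divergence the inner worst case has the closed form
    -delta * log E[exp(-reward / delta)], a 'soft-min' that tends to
    the plain empirical mean as delta -> infinity.
    """
    return -delta * np.log(np.mean(np.exp(-rewards / delta)))

def robust_frontier(sample, deltas, n_boot=100, seed=0):
    """Crude bootstrap approximation of a robust mean-variance
    frontier: for each robustness parameter delta, re-solve the
    robust problem on bootstrap resamples and record the mean and
    variance of the resulting reward on the original sample."""
    rng = np.random.default_rng(seed)
    actions = np.linspace(-2.0, 2.0, 81)      # grid-searched decision
    frontier = []
    for delta in deltas:
        means, variances = [], []
        for _ in range(n_boot):
            boot = rng.choice(sample, size=len(sample), replace=True)
            vals = [robust_value(reward(a, boot), delta) for a in actions]
            a_star = actions[int(np.argmax(vals))]
            out = reward(a_star, sample)      # 'out-of-sample' proxy
            means.append(out.mean())
            variances.append(out.var())
        frontier.append((delta, np.mean(means), np.mean(variances)))
    return frontier

sample = np.random.default_rng(1).normal(size=200)
for delta, m, v in robust_frontier(sample, deltas=[0.5, 2.0, 10.0]):
    print(f"delta={delta:5.1f}  mean={m:+.3f}  var={v:.3f}")
```

Scanning the printed (delta, mean, variance) triples and picking a delta with low variance and little sacrifice in mean is the spirit of the frontier-based calibration the abstract describes.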
Approximate generalized Jensen type mappings in proper Lie CQ*-algebras
In this paper, we investigate the stability problems for proper Lie derivations associated with the generalized Jensen type functional equation in a proper Lie CQ*-algebra.
A Fixed Point Approach to the Stability of General Quadratic Euler-Lagrange Functional Equations in Intuitionistic Fuzzy Spaces
In this paper, we prove the generalized Hyers-Ulam stability of a general k-quadratic Euler-Lagrange functional equation, for any fixed positive integer k, in intuitionistic fuzzy normed spaces, using a fixed point method.
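For context on the stability notion used in the two abstracts above, the classical Hyers-Ulam theorem for the Cauchy equation is the prototype; the fixed point method recasts the approximating map as the fixed point of a contraction on a generalized metric space. The statement below is generic background, not the papers' exact results.

```latex
% Classical Hyers-Ulam stability of the Cauchy equation on a Banach
% space: an approximately additive map is uniformly close to a
% unique truly additive one.
\begin{align}
  \|f(x+y)-f(x)-f(y)\| \le \varepsilon \ \ \forall x,y
  \;\Longrightarrow\;
  \exists!\, A \text{ additive}:\ \|f(x)-A(x)\| \le \varepsilon, \\
  A(x) = \lim_{n\to\infty} 2^{-n} f(2^{n} x).
\end{align}
```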